Reviews: Multi-value Rule Sets for Interpretable Classification with Feature-Efficient Representations
The paper proposes learning sets of decision rules whose atoms can express a disjunction of feature values, for example, IF color = yellow OR color = red, THEN stop. The emphasis is on interpretability: the paper argues that these multi-value rules are more interpretable than comparably trained decision sets that do not support multi-value atoms. Following prior work, the paper places a prior distribution over the parameters of the rule set, such as the number of rules and the maximum number of atoms per rule. The paper derives bounds on the resulting distribution to accelerate a simulated-annealing learning algorithm. Experiments show that multi-value rule sets are as accurate as other classifiers proposed as interpretable model classes, such as Bayesian Rule Sets, on benchmark classification problems.
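To make the classification scheme under review concrete, here is a minimal sketch (my own illustration, not the authors' implementation) of how a multi-value rule set classifies an instance: each atom permits a *set* of values for one feature, a rule is a conjunction of atoms, and the set predicts the positive class if any rule fires. The rule encoding and the `rule_fires`/`predict` helpers are assumptions for illustration.

```python
def rule_fires(rule, instance):
    """A rule is a dict mapping feature name -> set of allowed values.
    The rule fires when every atom's value set contains the instance's value."""
    return all(instance.get(feat) in values for feat, values in rule.items())

def predict(rule_set, instance, positive="stop", negative="go"):
    """Predict the positive class if any rule in the set fires (disjunction of rules)."""
    return positive if any(rule_fires(r, instance) for r in rule_set) else negative

# Example multi-value rule: IF color in {yellow, red} AND shape = octagon THEN stop
rules = [{"color": {"yellow", "red"}, "shape": {"octagon"}}]
print(predict(rules, {"color": "red", "shape": "octagon"}))    # stop
print(predict(rules, {"color": "green", "shape": "octagon"}))  # go
```

A single-value rule set would need one rule per allowed color to express the same condition, which is the redundancy the multi-value representation is meant to remove.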